Internet Traffic Tends Toward Poisson and Independent as the Load Increases

Authors

  • Jin Cao
  • William S. Cleveland
  • Dong Lin
  • Don X. Sun
Abstract

Network devices put packets on an Internet link, and multiplex, or superpose, the packets from different active connections. Extensive empirical and theoretical studies of packet traffic variables — arrivals, sizes, and packet counts — demonstrate that the number of active connections has a dramatic effect on traffic characteristics. At low connection loads on an uncongested link — that is, with little or no queueing on the link-input router — the traffic variables are long-range dependent, creating burstiness: large variation in the traffic bit rate. As the load increases, the laws of superposition of marked point processes push the arrivals toward Poisson, push the sizes toward independence, and reduce the variability of the counts relative to the mean. This begins a reduction in the burstiness; in network parlance, there are multiplexing gains. Once the connection load is sufficiently large, the network begins pushing back on the attraction to Poisson and independence by causing queueing on the link-input router. But if the link speed is high enough, the traffic can get quite close to Poisson and independence before the push-back begins in force; while some of the statistical properties are changed in this high-speed case, the push-back does not resurrect the burstiness. These results reverse the commonly held presumption that Internet traffic is everywhere bursty and that multiplexing gains do not occur. Very simple statistical time series models — fractional sum-difference (FSD) models — describe the statistical variability of the traffic variables and their change toward Poisson and independence before significant queueing sets in, and can be used to generate open-loop packet arrivals and sizes for simulation studies. Both science and engineering are affected. The magnitude of multiplexing needs to become part of the fundamental scientific framework that guides the study of Internet traffic. The engineering of Internet devices and Internet networks needs to reflect the multiplexing gains.

(This paper is to be published in Nonlinear Estimation and Classification, eds. C. Holmes, D. Denison, M. Hansen, B. Yu, and B. Mallick, Springer, New York, 2002. The authors are in the Computing and Mathematical Sciences Research Division, Bell Labs, Murray Hill, NJ. Emails: cao, wsc, dong, [email protected].)

I. ARE THERE MULTIPLEXING GAINS?

When two hosts communicate over the Internet — for example, when a PC and a Web server communicate for the purpose of sending a Web page from the server to the PC — the two hosts set up a connection. One or more files are broken up into pieces, headers are added to the pieces to form packets, and the two hosts send packets to one another across the Internet. When the transfer is completed, the connection ends. The headers, typically 40 bytes in size, contain much information about the packets, such as their source, destination, and size. In addition, there are 40-byte control packets, all header and no file data, that transfer information from one host to the other about the state of the connection. The maximum amount of file information allowed in a packet is 1460 bytes, so packets vary in size from 40 bytes to 1500 bytes. Each packet travels across the Internet on a path made up of devices and transmission links between these devices. The devices are the two hosts at the ends and routers in between. Each device sends the packet across a transmission link to the next device on the path.
The physical medium, or the "wire", for a link might be a telephone wire from a home to a telephone company, or a coaxial cable in a university building, or a piece of fiber connecting two devices on the network of an Internet service provider. So each link has two devices: the sending device, which puts the bits of the packet on the link, and the receiving device, which receives the bits. Each router serves as a receiving device for one or more input links and as the sending device for one or more output links; it receives the packet on an input link, reads the header to determine the output link, and sends the bits of the packet.

Each link has a speed: the rate at which the bits are put on the wire by the sending device and received by the receiving device. Units are typically kilobits/sec (kbps), megabits/sec (mbps), or gigabits/sec (gbps). Typical speeds are 56 kbps, 1.5 mbps, 10 mbps, 100 mbps, 156 mbps, 622 mbps, 1 gbps, and 2.5 gbps. The transmission time of a packet on a link is the time it takes to put all of the bits of the packet on the link. For example, the transmission time for a 1500-byte (12000-bit) packet is 120 μs at 100 mbps and 12 μs at 1 gbps. So packets pass more quickly on a higher-speed link than on a lower-speed one.

The packet traffic on a link can be modeled as a marked point process. The arrival times of the process are the arrival times of the packets on the link; a packet arrives at the moment its first bit appears on the link. The marks of the process are the packet sizes. An Internet link typically carries the packets of many active connections between pairs of hosts. The packets of the different connections are intermingled on the link; for example, if there are three active connections, the arrival order of 10 consecutive packets by connection number might be 1, 1, 2, 3, 1, 1, 3, 3, 2, and 3. This intermingling is referred to as "statistical multiplexing" in the Internet engineering literature, and as "superposition" in the literature of point processes.

If a link's sending device cannot put a packet on the link because it is busy with one or more other packets that arrived earlier, then the device puts the packet in a queue, physically a buffer. Queueing on the device delays packets, and if it gets bad enough and the buffer size is exceeded, packets are dropped. This reduces the quality of Internet applications such as Web page downloads and streaming video. Consider a specific link. Queueing of the packet in the buffer of the link's sending device is upstream queueing; so is queueing of the packet on sending devices that processed the packet earlier on its flight from sending host to receiving host. Queueing of the packet on the receiving device, as well as on devices further along on its path, is downstream queueing.

All along the path from one host to another, the statistical characteristics of the packet arrivals and their sizes on each link affect the downstream queueing, particularly the queueing on the link receiving device. The most accommodating traffic would have arrivals and sizes on the link that result in a traffic rate in bits/sec that is constant; this would be achieved if the packet sizes were constant (which they are not) and if they arrived at equally spaced points in time (which they do not). In this case we would know exactly how to engineer a link of a certain speed: we would allow a traffic rate equal to the link speed. There would be no queueing and no device buffers. The utilization, the ratio of the traffic rate to the link speed, would be 100%, so the transmission resources would be used the most efficiently. If the link speed were 1.5 mbps, the traffic rate would be 1.5 mbps. (A short sketch of these computations follows.)
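The sketch below is our own illustration, not code from the paper; the function name is ours. It computes the packet transmission times quoted above and the idealized 100% utilization of the constant-bit-rate case:

```python
def transmission_time_us(packet_bytes: int, link_bps: float) -> float:
    """Time to put all bits of a packet on a link, in microseconds."""
    return packet_bytes * 8 / link_bps * 1e6

print(transmission_time_us(1500, 100e6))   # 120.0 us at 100 mbps
print(transmission_time_us(1500, 1e9))     # 12.0 us at 1 gbps

# Idealized constant-bit-rate engineering: the allowed traffic rate
# equals the link speed, so utilization is 100%.
utilization = 1.5e6 / 1.5e6
print(f"utilization: {utilization:.0%}")   # 100%
```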
Suppose instead that the traffic is stationary with Poisson arrivals and independent sizes. There would be queueing, so a buffer is needed. Here is how we would engineer the link to get good performance. Suppose the speed is 1.5 mbps. We choose a buffer size so that a packet arriving when the queue is nearly full would not have to wait more than about 500 ms; for 1.5 mbps this would be about 100 kilobytes, or 800 kilobits. An amount of traffic is allowed so that only a small percentage of packets are dropped, say 0.5%. For this Poisson and independent traffic, we could do this and achieve a utilization of 95%, so the traffic rate would be 1.425 mbps. (A simulation sketch of this engineering calculation appears at the end of this section.)

Unfortunately, we do not get to choose the traffic characteristics. They are dictated by the engineering protocols that underlie the Internet. What can occur is far less accommodating than traffic that has a constant bit rate, or traffic that is Poisson and independent. The traffic can be very bursty. This means the following. The packet sizes and inter-arrival times are sequences that we can treat as time series. Both sequences can have persistent, long-range dependence; this means the autocorrelations are positive and fall off slowly with the lag $k$, for example, like $k^{-\alpha}$ where $0 < \alpha < 1$. Long-range dependent time series have long excursions above the mean and long excursions below the mean. Furthermore, for the sizes and inter-arrivals, the coefficient of variation, the standard deviation divided by the mean, can be large, so the excursions can be large in magnitude as well as long in time. The result is large downstream queue-height distributions with large variability. Now, when we engineer a link of 1.5 mbps, utilizations would be much lower, about 40%, which is a traffic rate of 0.6 mbps.

Before 2000, this long-range dependence had been established for links with relatively low link speeds, and therefore low numbers of simultaneous active connections, or connection loads, and therefore low traffic rates. But beginning in 2000, studies were undertaken to determine if on links with higher speeds, and therefore greater connection loads, there were effects due to the increased statistical multiplexing. Suppose we start out with a small number of active connections. What happens to the statistical properties of the traffic as we increase the connection load? In other words, what is the effect of the increase in magnitude of the multiplexing? We would expect that the statistical properties change in profound ways, not simply that the mean of the inter-arrivals decreases. Does the long-range dependence dissipate? Does the traffic tend toward Poisson and independent, as suggested by the superposition theory of marked point processes? This would mean that the link utilization resulting from the above engineering method would increase; in network parlance, there would be multiplexing gains. In this article, we review the results of the new studies on the effect of increased statistical multiplexing on the statistical properties of packet traffic on an Internet link.
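To make the Poisson-and-independent engineering concrete, here is a hedged, minimal event-driven simulation. It is our illustration, not the paper's method; the packet size distribution and the parameter values are assumptions. It feeds a 1.5 mbps link Poisson arrivals with independent sizes and estimates the drop rate for the buffer and utilization discussed above:

```python
import random

LINK_BPS = 1.5e6             # 1.5 mbps link, as in the text
BUFFER_BITS = 800_000        # about 100 kilobytes (800 kilobits) of buffer
UTILIZATION = 0.95           # target utilization
MEAN_PKT_BITS = 8 * 770      # sizes drawn uniform on [40, 1500] bytes (assumed)
ARRIVAL_RATE = UTILIZATION * LINK_BPS / MEAN_PKT_BITS   # packets per second

random.seed(1)
clock, busy_until, dropped, n = 0.0, 0.0, 0, 500_000
for _ in range(n):
    clock += random.expovariate(ARRIVAL_RATE)    # Poisson arrivals
    pkt_bits = 8 * random.uniform(40, 1500)      # independent sizes
    backlog_bits = max(0.0, busy_until - clock) * LINK_BPS
    if backlog_bits + pkt_bits > BUFFER_BITS:
        dropped += 1                             # buffer full: drop the packet
    else:
        busy_until = max(busy_until, clock) + pkt_bits / LINK_BPS
print(f"drop rate: {100 * dropped / n:.2f}%")
```

Under these assumed parameters the simulated drop rate comes out at a fraction of a percent, in the spirit of the 0.5% target above.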
II. THE VIEW OF THE INTERNET CIRCA 2000

The study of Internet traffic beginning in the early 1990s resulted in extremely important discoveries in two pioneering articles [1], [2]: counts of packet arrivals in equally-spaced consecutive intervals of time are long-range dependent and have a large coefficient of variation (ratio of the standard deviation to the mean), and packet inter-arrivals have a marginal distribution that has a longer tail than the exponential. This means the arrivals are not a Poisson process, because the counts of a Poisson process are independent and the inter-arrivals are exponential. The title of the second article, "Wide-Area Traffic: The Failure of Poisson Modeling", sent a strong message that the old Poisson models for voice telephone networks would not do for the emerging Internet. And because queue-height distributions for long-range dependent traffic, relative to the average bit rate, are much greater than for Poisson processes, it sent a signal that Internet technology would have to be quite different from telephone network technology. The discovery of long-range dependence was confirmed in many other studies (e.g., [3], [4], [5]). The work on long-range dependence drew heavily on the brilliant work of Mandelbrot [6], both for basic concepts and for methodology.

Models of source traffic were put forward to explain the traffic characteristics [3], [7], [8], [9]. The sizes of transferred files utilizing a link vary immensely; to a good approximation, the upper tail of the file size distribution is Pareto with a shape parameter that is often between 1 and 2, so the mean exists but not the variance. A link sees the transfer of files whose sizes vary by many orders of magnitude. Modeling the link traffic began with an assumption of a collection of on-off traffic sources, each on (with a value of 1) when the source was transferring a file over the link, and off (with a value of 0) when not. Since the model has no concept of packets, just connections, multiplexing becomes summation; the link traffic is a sum, or aggregate, of source processes. Because of the heavy tail of the on process, the summation is long-range dependent and, for a small number of source processes, has a large coefficient of variation. We will refer to this as the on-off aggregation theory. (A small numerical illustration of the infinite-variance Pareto tail follows.)
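As a hedged aside (our illustration, not from the paper), the following sketch shows numerically what a Pareto shape parameter between 1 and 2 means: sample means of simulated file sizes settle down, while sample variances keep growing with the sample size.

```python
import random

random.seed(2)
alpha = 1.2                            # assumed Pareto shape, between 1 and 2
for n in (10**3, 10**4, 10**5, 10**6):
    xs = [random.paretovariate(alpha) for _ in range(n)]   # sizes >= 1
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    print(f"n={n:>7}  mean={mean:7.2f}  variance={var:14.1f}")
# The sample means settle near alpha/(alpha - 1) = 6, the finite mean;
# the sample variances keep growing because the true variance is infinite.
```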
Before 2000, there was little empirical study of packet arrivals and sizes. Most of the intuition, theory, and empirical study of the Internet was based on a study of packet and byte counts. It took some time for articles to appear in the literature showing packet inter-arrivals and packet sizes are long-range dependent, although one might have guessed this from the results for counts. The first report in the literature of which we are aware appeared in 1999 [5]. The first articles of which we are aware that sizes are long-range dependent appeared in 2001 [10], [11].

While there was no comprehensive empirical study of the effect of multiplexing, before 2000 there were theoretical investigations. Some of the early, foundation-setting articles on Internet traffic contained conjectures that multiplexing gains did not occur. Leland et al. [1] wrote:

  We demonstrate that Ethernet LAN traffic is statistically self-similar, ... and that aggregating streams of such traffic typically intensifies the self-similarity ('burstiness') instead of smoothing it.

Crovella and Bestavros [3] wrote:

  One of the most important aspects of self-similar traffic is that there is no characteristic size of a traffic burst; as a result, the aggregation or superposition of many such sources does not result in a smoother traffic pattern.

Further consideration and discussion, however, suggested that issues other than long-range dependence needed to be considered. Erramilli et al. [12] wrote:

  ... the FBM [fractional Brownian motion] model does predict significant multiplexing gains when a large number of independent sources are multiplexed; the relative magnitude is reduced by $\sqrt{n}$ ...

Floyd and Paxson [7] wrote:

  ... we must note that it remains an open question whether in highly aggregated situations, such as on Internet backbone links, the correlations [of long-range dependent traffic], while present, have little actual effect because the variance of the packet arrival process is quite small.

In addition, there were theoretical discussions of the implications of increased multiplexing on queueing [13], [14], [15], [16], [17]. But the problem with such theoretical study is that results depend on the assumptions about the individual traffic sources being superposed, and different plausible assumptions lead to different results. Without empirical study, it was not possible to resolve the uncertainty about assumptions. With no clear empirical study to guide judgment, many subscribed to a presumption that multiplexing gains did not occur, or were too small to be relevant. For example, Listani et al. [18] wrote:

  ... traffic on Internet networks exhibits the same characteristics regardless of the number of simultaneous sessions on a given physical link.

Internet service providers acted on this presumption in designing and provisioning networks, and equipment designers acted on it in designing devices.

III. FOUNDATIONS: THEORY AND EMPIRICAL STUDY

Starting in 2000, a current of research was begun to determine the effect of increased multiplexing on the statistical properties of many Internet traffic variables, to determine if multiplexing gains occurred [10], [19], [20], [21], [22]. The empirical study of byte and packet counts of previous work was enlarged to include a study of arrivals and sizes. Of course, much can be learned from counts, but arrivals and sizes are the more fundamental traffic variables. It is arriving packets with varying sizes that network devices process, not aggregations of packets in fixed intervals, and packet and byte counts are derived from arrivals and sizes, but not conversely. In keeping with a focus on arrivals and sizes, the superposition theory of marked point processes became a guiding theoretical framework, replacing the on-off aggregation theory that was applicable to counts but not to arrivals and sizes [23], [24]. The two theories are quite different. For the on-off aggregation theory, one considers a sum of independent random variables, and a central limit theorem shows the limit is a normal distribution. For the superposition theory, in addition to the behavior of sums, one considers a superposition of independent marked point processes, and a central limit theorem shows the limit is a Poisson point process with independent marks; quite importantly, the theorem applies even when the inter-arrivals and marks of each superposed source point process are long-range dependent. The following discussion draws largely on the very detailed account in [19].
We will consider packet arrivals and sizes, and packet counts in fixed intervals. We omit the discussion of byte counts since their behavior is much like that of the packet counts.

IV. THEORY: POISSON AND INDEPENDENCE

Let $a_j$, for $j = 1, 2, \ldots$, be the arrival times of packets on an Internet link, where $j = 1$ is for the first packet, $j = 2$ is for the second packet, and so forth. Let $t_j = a_{j+1} - a_j$ be the inter-arrival times, and let $q_j$ be the packet sizes. We treat $a_j$ and $q_j$ as a marked point process. $a_j$, $t_j$, and $q_j$ are studied as time series in $j$. Suppose we divide time into equally-spaced intervals $[\delta i, \delta(i+1))$, for $i = 1, 2, \ldots$, where $\delta$ might be 1 ms or 10 ms or 100 ms. Let $p_i$ be the packet count, the number of arrivals in interval $i$. The $p_i$ are studied as a time series in $i$.

Suppose the packet traffic is the result of multiplexing $m$ traffic sources on the link. Each source has packet arrival times, packet sizes, and packet counts. The arrival times $a_j$ and the sizes $q_j$ of the superposition marked point process result from superposing the arrivals and sizes of the $m$ source marked point processes. The packet count $p_i$ of the superposition process in interval $i$ results from summing the $m$ packet counts for the $m$ sources in interval $i$; theoretical considerations for the $p_i$ are, of course, the same as those for the on-off aggregation theory described earlier.

Provided certain assumptions hold, the superposition theory of marked point processes prescribes certain behaviors for $a_j$, $t_j$, $q_j$, and $p_i$ as $m$ increases [24]. The arrivals $a_j$ tend toward Poisson, which means the inter-arrivals $t_j$ tend toward independent and their marginal distribution tends toward exponential. The sizes $q_j$ tend toward independent, but there is no change in their marginal distribution. As discussed earlier, the $t_j$ and $q_j$ have been shown to be long-range dependent for small $m$. Thus the theory predicts that the long-range dependence of the $t_j$ and the $q_j$ dissipates. But the autocorrelation of the packet counts $p_i$ does not change with $m$, so its long-range dependence is stable. However, the standard deviation relative to the mean, the coefficient of variation, falls off like $1/\sqrt{m}$. This means that the burstiness of the counts dissipates as well; the durations of excursions of $p_i$ above or below the mean, which are long because of the long-range dependence, do not change because the correlation stays the same, but the magnitudes of the excursions get smaller and smaller because the statistical variability decreases.

The following assumptions for the source packet processes lead to the above conclusions:

  • homogeneity: they have the same statistical properties.
  • stationarity: their statistical properties do not change through time.
  • independence: they are independent of one another, and the size process of each is independent of the arrival process.
  • non-simultaneity: the probability of two or more packet arrivals for a source in an interval of length $w$ is $o(w)$, where $o(w)/w$ tends to zero as $w$ tends to zero.

We cannot take the source processes to be the individual connections; they are not stationary, but rather transient, that is, they have a start time and a finish time. Instead, we randomly assign each connection to one of $m$ source processes. Suppose the connection start times are a stationary point process, and let $\lambda$ be the arrival rate. Then the arrival rate for each source process is $\lambda/m$. We let $\lambda \to \infty$, keeping $\lambda/m$ fixed at a number sufficiently large that the source processes are stationary; so $m \to \infty$. We refer to the formation of the source processes, the assumptions about them, and the implications, as the superposition theory. (A small simulation sketch of the superposition limit follows.)
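To illustrate the prescribed limit, here is a hedged simulation sketch of our own, under the assumptions above and with an assumed Pareto inter-arrival distribution. It superposes $m$ independent renewal sources at fixed total rate and shows the merged inter-arrivals moving toward the exponential, whose coefficient of variation is 1 and whose tail probability $P(t > 3 \cdot \mathrm{mean})$ is $e^{-3} \approx 0.05$:

```python
import random
import statistics

def source_arrivals(rate: float, horizon: float, alpha: float = 1.5) -> list:
    """One renewal source with heavy-tailed Pareto inter-arrivals."""
    mean = alpha / (alpha - 1)         # mean of random.paretovariate(alpha)
    scale = 1.0 / (rate * mean)        # rescale the source to the given rate
    t, out = 0.0, []
    while True:
        t += scale * random.paretovariate(alpha)
        if t > horizon:
            return out
        out.append(t)

random.seed(3)
for m in (1, 10, 100):
    # Superpose m independent sources, holding the total rate fixed at 1.
    arrivals = sorted(a for _ in range(m)
                      for a in source_arrivals(rate=1.0 / m, horizon=20_000))
    gaps = [b - a for a, b in zip(arrivals, arrivals[1:])]
    mu = statistics.fmean(gaps)
    cv = statistics.stdev(gaps) / mu
    tail = sum(g > 3 * mu for g in gaps) / len(gaps)
    print(f"m={m:3d}  CV={cv:5.2f}  P(gap > 3*mean)={tail:.3f}")
```

As $m$ grows, the coefficient of variation approaches 1 and the tail probability approaches $e^{-3}$, the exponential values; this illustrates only the marginal part of the limit, not the independence part.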
It is surely true that all we have done with this theory is to trade our uncertainty about whether the superposition process is attracted to Poisson and independence for an uncertainty about whether the above construction creates source processes that satisfy the assumptions. But it is at least plausible, although by no means certain, that there are cases where the source processes satisfy the above assumptions over a range of values of $m$. What we have done is to create a plausible hypothesis to be tested by the empirical study that we describe shortly.

V. THEORY: THE NETWORK PUSHES BACK

While we cannot verify the hypotheses of the superposition theory without empirical study, we can at least quite convincingly describe a way in which the network can push back and defeat the assumptions. Once $m$ is large enough, significant link-input queueing begins, and then grows as $m$ gets larger still; at some point, the queueing will be large enough that the assumptions of independence of the different source processes, and of independence of the inter-arrivals and the sizes of each source process, no longer serve as good approximations in describing the behavior of the source processes. (A small amount of queueing, which almost always occurs, does not invalidate the approximation.) Consider two packets, $j = 19$ and $j = 20$. Suppose packet 20 waits in the queue for packet 19 to be transmitted. The two are back-to-back on the link, which means, because the arrival time is the first moment of transmission, that $t_{19}$ is the time to put the bits of packet 19 on the link, which is equal to $q_{19}/\ell$, where $\ell$ is the link speed. For example, at $\ell = 100$ mbps, the time for a 1500-byte (12000-bit) packet is 120 μs. So given $q_{19}$ we know $t_{19}$ exactly. Queueing can occur on routers further upstream than the link-input router and affect the assumptions as well.

The arrival times of the packets on the link, $a_j$, are the departure times of the packets from the queue. The departure times are the arrival times at the queue plus the time spent in the queue. If there are no other packets in the queue when packet $j$ arrives, then $a_j$ is also the arrival time at the queue. Suppose queueing is first-in-first-out. Then the order of the arriving packets at the queue is the same as the order of departing packets from the queue, so $q_j$ is also the packet size process for the arrivals at the queue. The effect of queueing on $q_j$ is simple. Because the queueing does not alter the $q_j$, the statistical properties of the $q_j$ are unaffected by the queueing; in particular, their limit of independence is not altered. But statistical theory for the departure times from a queue is not developed well enough to provide much guidance for the effect of queueing on the statistical properties of $t_j$ and $p_i$. However, the properties of the extreme case are clear. If $m$ is so large that the queue never drains, then the $t_j$ are equal to $q_j/\ell$, so the $t_j$ take on the statistical properties of $q_j$. Since the $q_j$ tend to independence, the $t_j$ eventually go to independence, so there is no long-range dependence. A Poisson process is a renewal process, a point process with independent inter-arrivals, with the added property that the marginal distribution of the inter-arrivals is exponential. The extreme $t_j$ process is a renewal process, but with a marginal distribution proportional to that of the packet sizes. The extreme $p_i$ is the count process corresponding to the $t_j$ renewal process; this implies the coefficient of variation of $p_i$ is a constant, so the decrease like $1/\sqrt{m}$ prescribed by the superposition theory is arrested, and it implies the $p_i$ are independent, so there is no long-range dependence. We do not expect to see the extreme case in our empirical study, but it does provide at least a point of attraction. (A simulation sketch of this extreme case follows.)
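As a hedged illustration of the extreme case (our own simulation, with an assumed uniform size distribution), the following sketch sets $t_j = q_j/\ell$ for a never-draining queue, forms 100-ms counts, and checks that their lag-1 autocorrelation is near zero:

```python
import random
import statistics

random.seed(5)
LINK_BPS = 100e6                               # 100 mbps link (assumed)
sizes = [random.uniform(40, 1500) for _ in range(500_000)]   # assumed sizes
t = [8 * q / LINK_BPS for q in sizes]          # t_j = q_j / l: queue never drains

# Form 100-ms packet counts p_i from the cumulative arrival times.
counts, clock, width = {}, 0.0, 0.1
for gap in t:
    clock += gap
    i = int(clock / width)
    counts[i] = counts.get(i, 0) + 1
p = [counts.get(i, 0) for i in range(max(counts) + 1)]

mu = statistics.fmean(p)
num = sum((a - mu) * (b - mu) for a, b in zip(p, p[1:]))
den = sum((a - mu) ** 2 for a in p)
print(f"lag-1 autocorrelation of counts: {num / den:+.3f}")   # near zero
```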
VI. EMPIRICAL STUDY: INTRODUCTION

The superposition theory and the heuristic discussion of the effect of upstream queueing provide hypotheses about the statistical properties of the inter-arrivals $t_j$, the sizes $q_j$, and the counts $p_i$. We carried out extensive empirical studies to investigate the validity of the hypotheses [10], [19], [25]. In the early 1990s, Internet researchers put together a comprehensive measurement framework for studying the characteristics of packet traffic that allows not just statistical study of traffic, but performance studies of Internet engineering designs, protocols, and algorithms [26], [27]. The framework consists of capturing the headers of all packets arriving on a link and time-stamping each packet, that is, measuring the arrival time $a_j$. The result of measuring over an interval of time is a packet trace. Packet trace collection today enjoys a very high degree of accuracy and effectiveness for traffic study [28], [29].

We put together a very large database of packet traces measuring many Internet links whose speeds range from 10 mbps to 2.5 gbps, and we built S-Net, a software system based on the S language for graphics and data analysis, for analyzing very large packet header databases [20]. We put the database and S-Net to work to study the multiplexing hypotheses. For each studied trace, which covers a specific block of time on a link, we compute $a_j$, $t_j$, $q_j$, and the 100-ms counts $p_i$. We also need a summary measure of the magnitude of multiplexing for the trace. At each point in time over the trace, the measure is the number of active connections. The summary measure, $c$, for the whole trace is the average number of active connections over all times in the trace.

Here, we describe some of the results of one of our empirical investigations, in which we analyzed 2526 packet header traces, 5 min or 90 sec in duration, from 6 Internet monitors measuring 15 links ranging from 100 mbps to 622 mbps [19]. Table I shows information about the traces. Each row describes the traces for one link. The first column gives the trace group name; the trace length is part of each name. Column 2 gives the number of traces. Column 3 gives the link speed. Column 4 gives the mean of the log base 2 of $c$ for the traces of the link.

    Trace Group       Number   Link Speed   log2(c)
    AIX1(90sec)           23   622 mbps       13.09
    AIX2(90sec)           23   622 mbps       13.06
    COS1(90sec)           90   156 mbps       10.83
    COS2(90sec)           90   156 mbps       10.81
    NZIX(5min)           100   100 mbps       10.75
    NZIX7(5min)          100   100 mbps        9.60
    NZIX5(5min)          100   100 mbps        8.66
    NZIX6(5min)          100   100 mbps        7.85
    NZIX2(5min)          100   100 mbps        7.32
    NZIX4(5min)          100   100 mbps        7.17
    BELL(5min)           500   100 mbps        6.97
    NZIX3(5min)          100   100 mbps        6.54
    BELL-IN(5min)        500   100 mbps        5.98
    BELL-OUT(5min)       500   100 mbps        5.94
    NZIX1(5min)          100   100 mbps        4.42

    TABLE I. Trace Group: name, including the length of the traces. Number: number of traces. Link Speed: speed of the link. log2(c): mean log base 2 of the average number of active connections.

Consider each packet in a trace. Arriving after it is a back-to-back run of $k$ packets, for $k = 0, 1, \ldots$; each packet in the run is back-to-back with its predecessor. If packet 19 has a back-to-back run of 3 packets, then packet 20 is back-to-back with 19, 21 is back-to-back with 20, and 22 is back-to-back with 21, but 23 is not back-to-back with 22. The percent of packets with back-to-back runs of $k$ or more is a measure of the amount of queueing on the link-input router. (A sketch of this computation follows.)
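Here is a hedged sketch of how such a measure can be computed from a trace's arrival times and sizes; it is our illustration, and the function name, the tolerance, and its value are assumptions. Packet $j+1$ is taken to be back-to-back with packet $j$ when it arrives just as packet $j$ finishes transmission, up to a small timestamp tolerance:

```python
def pct_with_runs_at_least(arrivals, sizes_bytes, link_bps, k=3, tol=1e-7):
    """Percent of packets whose back-to-back run is k packets or more."""
    n = len(arrivals)
    run_after = [0] * n        # length of the back-to-back run after packet j
    for j in range(n - 2, -1, -1):
        trans = 8 * sizes_bytes[j] / link_bps      # transmission time of j
        # j+1 is back-to-back with j if it arrives as j finishes transmission
        if abs(arrivals[j + 1] - (arrivals[j] + trans)) <= tol:
            run_after[j] = run_after[j + 1] + 1
    return 100.0 * sum(r >= k for r in run_after) / n
```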
We studied this measure for many values of $k$. We need such study to indicate when the network is likely pushing back on the attraction to Poisson and independence. Figure 1 graphs the percent of packets whose back-to-back runs are 3 or greater against $\log_2(c)$. Each point on the plot is one trace. Each of the 15 panels contains the points for one link. The panels are ordered, left to right and bottom to top, by the means of $\log_2(c)$ for the 15 links, given in column 4 of Table I. Figure 1, and others like it for different values of $k$, show that only four links experience more than minor queueing — COS1, COS2, AIX1, and AIX2 — so we would not expect to see significant push-back except at these four. However, queueing further upstream than the link-input router can affect the traffic properties as well, but without creating back-to-back packets, so we reserve final judgment until we see the coming analyses. Figure 1 also provides information about the values of $c$. Since the mean of $\log_2(c)$ increases left to right and bottom to top, the distribution shifts generally toward higher values in this order. The smallest $c$, which appears in the lower left panel, is 5.9 connections; the largest, which appears in the upper right panel, is 16164 connections.

[Figure 1: for each of the 15 links, the percent of packets with back-to-back runs of 3 or greater plotted against log2(c); each point is one trace, one panel per link, panels ordered by mean log2(c).]

VII. EMPIRICAL STUDY: FSD AND FSD-MA(1) MODELS

In this section we introduce two very simple classes of stationary time series models [25], one a subclass of the other, that we found provide excellent fits to the inter-arrivals $t_j$, the sizes $q_j$, and the counts $p_i$ for the 2526 traces. The models are parametric. One of the parameters determines the amount of dependence. At low values of the parameter, the series has substantial autocorrelation and is long-range dependent. As the parameter increases, the amount of dependence decreases. At the largest value of the parameter, the series is independent. Other parameters determine the marginal distribution of the series, and therefore the coefficient of variation. By fitting the models to each trace, we can study the multiplexing gains by studying the changing values of the parameters across the traces, and relating the changes to the average active connection load $c$ of the traces. The two model classes are fractional sum-difference (FSD) models and FSD-MA(1) models [25]. FSD models have two additive components: a long-range dependent component and an independent component, with the dependence parameter setting their relative weights.
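As a rough illustration of this model flavor (our reconstruction from the qualitative description above, not the authors' specification), the following sketch mixes a fractional-difference long-memory component with white noise; the weight theta plays the role of the dependence parameter, and theta = 1 gives an independent series:

```python
import random
import statistics

def fsd_like_series(n: int, d: float = 0.4, theta: float = 0.5,
                    trunc: int = 1000) -> list:
    """Long-memory component plus white noise; larger theta = less dependence."""
    # MA weights of the fractional integration (1 - B)^(-d):
    # psi_0 = 1, psi_k = psi_{k-1} * (k - 1 + d) / k
    psi = [1.0]
    for k in range(1, trunc):
        psi.append(psi[-1] * (k - 1 + d) / k)
    eps = [random.gauss(0.0, 1.0) for _ in range(n + trunc)]
    s = [sum(psi[k] * eps[trunc + t - k] for k in range(trunc))
         for t in range(n)]
    sd = statistics.stdev(s)                   # normalize to unit variance
    return [(1 - theta) ** 0.5 * (x / sd) + theta ** 0.5 * random.gauss(0.0, 1.0)
            for x in s]

random.seed(4)
z = fsd_like_series(4000, theta=0.25)          # strongly dependent example
```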
